To achieve optimal gait pattern recognition and reconstruction of non-sparse acceleration data in Wireless Body Area Network (WBAN)-based telemonitoring, a novel approach applying the Block Sparse Bayesian Learning (BSBL) algorithm was proposed to improve the reconstruction performance of non-sparse accelerometer data, thereby enabling superior gait pattern recognition. Its basic idea is as follows: within the Compressed Sensing (CS) framework of WBAN-based telemonitoring, the original acceleration data acquired at a sensor node in the WBAN were compressed only by a sparse measurement matrix (a simple linear projection), and the compressed data were transmitted to the remote terminal, where the BSBL algorithm was used to recover the non-sparse acceleration data, modeled as having block structure, by exploiting intra-block correlation; the recovered data were then used for gait pattern recognition. Acceleration data from the open USC-HAD database, covering walking, running, jumping, going upstairs and going downstairs, were employed to test the effectiveness of the proposed method. The experimental results show that on acceleration data, the reconstruction performance of the BSBL algorithm significantly outperforms conventional CS algorithms designed for sparse data, and a best accuracy of 98% was obtained by a Support Vector Machine (SVM) classifier on the BSBL-reconstructed data. These results demonstrate that the proposed method not only significantly improves the reconstruction of non-sparse acceleration data for accurate gait pattern recognition, but also facilitates the design of low-cost sensor node hardware with lower energy consumption, making it a promising approach for energy-efficient WBAN-based telemonitoring of human gait.
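As a rough sketch of the compressed-sensing pipeline described above (sensor-side linear projection with a sparse binary measurement matrix, remote-side reconstruction), the following Python fragment uses a minimum-norm least-squares solve as a stand-in for the BSBL solver; the signal, matrix sizes and sparsity pattern are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: an N-sample acceleration window compressed to M measurements.
N, M = 200, 80

# Sensor side: a sparse binary measurement matrix (a simple linear projection),
# cheap enough for a low-power WBAN node -- here 2 nonzeros per column.
Phi = np.zeros((M, N))
for col in range(N):
    rows = rng.choice(M, size=2, replace=False)
    Phi[rows, col] = 1.0

# A toy non-sparse but temporally correlated signal (random walk), standing in
# for raw accelerometer data.
x = np.cumsum(rng.normal(size=N))
y = Phi @ x  # compressed data transmitted to the remote terminal

# Remote side: minimum-norm least-squares recovery as a placeholder; the BSBL
# solver, which exploits block structure and intra-block correlation, would
# replace this step in the actual system.
x_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
nmse = np.linalg.norm(x - x_hat) ** 2 / np.linalg.norm(x) ** 2
```

The placeholder recovery reproduces the measurements exactly but, unlike BSBL, ignores the block structure of the signal, so its NMSE only serves as a naive baseline.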
To reduce the storage space of trajectory data and improve the speed of data analysis and transmission in the Global Positioning System (GPS), a hybrid trajectory compression algorithm based on multiple spatiotemporal characteristics was proposed. On the one hand, a new online trajectory compression strategy based on multiple spatiotemporal characteristics was adopted to choose characteristic points more accurately, using the position, direction and speed information of each GPS point. On the other hand, a hybrid compression strategy combining online compression with batched compression was used, in which the Douglas-Peucker batched compression algorithm performed the second compression pass. The experimental results show that the compression error of the new online strategy decreases significantly compared with the existing spatiotemporal compression algorithm, although the compression ratio falls slightly. By choosing an appropriate batching cycle time, both the compression ratio and the compression error of the proposed algorithm improve over the existing spatiotemporal compression algorithm.
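The batched second pass is the classic Douglas-Peucker simplification; a minimal pure-Python sketch (the epsilon tolerance and toy track below are illustrative assumptions) is:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / norm

def douglas_peucker(points, epsilon):
    """Classic Douglas-Peucker line simplification (the batched pass)."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the chord between the endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Recurse on both halves and merge (drop the duplicated split point).
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(track, epsilon=1.0)
```

In the hybrid scheme, each batch collected over one cycle of the online pass would be fed to `douglas_peucker` for the second compression.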
Aiming at the low classification accuracy of malware behavior analysis systems, a malware classification method based on Support Vector Machine (SVM) was proposed. First, a risk behavior library using software behavior results as features was established manually. Then all software behaviors were captured and matched against the risk behavior library, and the matching results were converted by a conversion algorithm into data suitable for SVM training. For the selection of the SVM model, kernel function and parameters (C, g), a method combining grid search with a Genetic Algorithm (GA) was used for optimization after theoretical analysis. A malware behavior assessment system based on the SVM classification model was designed to verify the effectiveness of the proposed classification method. The experiments show that the false positive rate and false negative rate of the system were 5.52% and 3.04% respectively, meaning that the proposed method outperforms K-Nearest Neighbors (KNN) and Naive Bayes (NB); its performance is on a par with the BP neural network, while having higher efficiency in training and classification.
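The two-stage (C, g) search can be sketched as a coarse grid search followed by GA refinement. The sketch below replaces actual SVM cross-validation with a hypothetical surrogate scoring function `cv_accuracy`, so only the search strategy itself is illustrated; in the real system this function would train and validate the classifier on the behavior-feature vectors.

```python
import math
import random

# Hypothetical surrogate for cross-validation accuracy as a function of (C, g);
# peaked at C = 10, g = 0.01 purely for demonstration.
def cv_accuracy(C, g):
    return 1.0 / (1.0 + (math.log10(C) - 1.0) ** 2 + (math.log10(g) + 2.0) ** 2)

# Stage 1: coarse grid search over exponentially spaced (C, g) values.
def grid_search():
    best = None
    for pC in range(-3, 4):
        for pg in range(-5, 2):
            C, g = 10.0 ** pC, 10.0 ** pg
            score = cv_accuracy(C, g)
            if best is None or score > best[0]:
                best = (score, C, g)
    return best

# Stage 2: a genetic algorithm refines around the grid optimum
# (elitist: the grid optimum is seeded into the population and never lost).
def ga_refine(C0, g0, generations=30, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [(C0, g0)] + [(C0 * 10 ** rng.uniform(-0.5, 0.5),
                         g0 * 10 ** rng.uniform(-0.5, 0.5))
                        for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=lambda p: cv_accuracy(*p), reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Geometric crossover plus log-scale mutation.
            C = math.sqrt(a[0] * b[0]) * 10 ** rng.gauss(0, 0.1)
            g = math.sqrt(a[1] * b[1]) * 10 ** rng.gauss(0, 0.1)
            children.append((C, g))
        pop = parents + children
    return max(pop, key=lambda p: cv_accuracy(*p))

score0, C0, g0 = grid_search()
C_best, g_best = ga_refine(C0, g0)
```

The grid pass narrows the search to the right order of magnitude cheaply; the GA then explores continuously around it without the cost of a fine grid.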
Since face images may not be over-complete and may be corrupted by noise under different viewpoints or lighting conditions, an efficient and effective method for Face Recognition (FR) was proposed, namely Robust Principal Component Analysis with Collaborative Representation based Classification (RPCA_CRC). Firstly, the face training dictionary D0 was decomposed into two matrices: the low-rank matrix D and the sparse error matrix E. Secondly, the test image was collaboratively represented over the low-rank matrix D. Finally, the test image was classified by its reconstruction error. Compared with Sparse Representation based Classification (SRC), RPCA_CRC is 25 times faster on average, and its recognition rate increases by 30% with fewer training images. The experimental results show that the proposed method is fast, effective and accurate.
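The collaborative-representation step can be sketched as ridge-regularized coding over the whole training dictionary followed by class-wise residual comparison. In the sketch below, the matrix D merely stands in for the low-rank dictionary that RPCA would recover from D0; the toy two-class data are an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the low-rank dictionary D: two classes of noisy copies of
# a class-specific base vector (one column per training image).
d, n_per_class = 30, 5
base = {c: rng.normal(size=d) for c in (0, 1)}
cols, labels = [], []
for c in (0, 1):
    for _ in range(n_per_class):
        cols.append(base[c] + 0.1 * rng.normal(size=d))
        labels.append(c)
D = np.column_stack(cols)
labels = np.array(labels)

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative representation: ridge-regularized coding over ALL
    training columns, then classification by class-wise residual."""
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

test_image = base[1] + 0.1 * rng.normal(size=d)
predicted = crc_classify(D, labels, test_image)
```

Unlike SRC, the coding step is a single closed-form linear solve rather than an iterative l1 minimization, which is where the speed advantage comes from.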
The objective of Blind Source Separation (BSS) is to restore unobservable source signals from their mixtures without prior knowledge of the mixing process. The potential source signals are assumed to be spatially uncorrelated but temporally correlated, i.e. they have non-vanishing temporal structure. A BSS method based on second-order statistics was proposed for such sources. Robust prewhitening was first performed on the observed mixed signals, with the dimension of the sources estimated by the Minimum Description Length (MDL) criterion. Then, blind separation was realized by applying Singular Value Decomposition (SVD) to the time-delayed covariance matrix of the whitened signals. A simulation on the separation of a group of speech signals demonstrates the effectiveness of the algorithm, whose performance was measured by the Signal-to-Interference Ratio (SIR) and the Performance Index (PI).
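A minimal AMUSE-style sketch of this second-order pipeline (prewhitening, then eigendecomposition of the symmetrized time-delayed covariance of the whitened signals) is shown below; the AR(1) toy sources and the 2x2 mixing matrix are assumptions, and the MDL-based dimension estimation is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two temporally correlated sources with distinct spectra (AR(1) processes).
T = 5000
def ar1(phi):
    s = np.zeros(T)
    e = rng.normal(size=T)
    for t in range(1, T):
        s[t] = phi * s[t - 1] + e[t]
    return s

S = np.vstack([ar1(0.95), ar1(-0.6)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = A @ S                                 # observed mixtures

# Step 1: prewhitening via eigendecomposition of the zero-lag covariance.
X = X - X.mean(axis=1, keepdims=True)
C0 = X @ X.T / T
d, E = np.linalg.eigh(C0)
W = E @ np.diag(d ** -0.5) @ E.T          # whitening matrix
Z = W @ X

# Step 2: symmetrized time-delayed covariance of the whitened signals; its
# eigenvectors (equivalently its SVD, since it is symmetric) give the rotation
# that separates the sources, provided its eigenvalues are distinct.
tau = 1
C_tau = Z[:, :-tau] @ Z[:, tau:].T / (T - tau)
C_tau = (C_tau + C_tau.T) / 2
_, U = np.linalg.eigh(C_tau)
Y = U.T @ Z                               # recovered sources (up to scale/order)
```

The recovered signals match the true sources only up to permutation and scaling, which is the inherent ambiguity of BSS and why SIR and PI are used for evaluation.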
To solve the difficult problem that the varying number of singular values produced by Singular Value Decomposition (SVD) for different signals affects the accuracy of fault identification, a fault diagnosis method based on dual SVD and Least Squares Support Vector Machine (LS-SVM) was put forward. The proposed method adaptively chooses the effective singular values for signal reconstruction by using the curvature spectrum of the singular values. SVD is then carried out again to acquire the same number of orthogonal components, whose energy entropy is calculated to construct the feature vector, which is finally fed into the LS-SVM classification model for fault identification. Compared with the method of using a fixed number of principal singular values as the feature vector, the results show that the proposed method improves the accuracy of bearing fault diagnosis by 13.34%, demonstrating its feasibility and validity.
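The adaptive selection step can be sketched as follows: embed the signal in a Hankel matrix, take its SVD, locate the knee of the singular-value curve via a discrete curvature spectrum, and reconstruct from the retained components. The toy two-tone signal, window length and curvature formula below are illustrative assumptions, and the LS-SVM classification stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vibration-like signal: two tones plus noise.
t = np.arange(1024) / 1024.0
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = signal + 0.3 * rng.normal(size=t.size)

# Hankel embedding (window length m), then SVD.
m = 64
H = np.column_stack([noisy[i:i + m] for i in range(noisy.size - m + 1)])
U, s, Vt = np.linalg.svd(H, full_matrices=False)

# Curvature spectrum of the singular-value sequence: the index of maximum
# curvature marks the knee between signal and noise subspaces.
d1 = np.gradient(s)
d2 = np.gradient(d1)
curvature = np.abs(d2) / (1 + d1 ** 2) ** 1.5
k = int(np.argmax(curvature[1:-1])) + 1   # ignore the two end points

# Reconstruct from the k leading components, with diagonal averaging to map
# the rank-k Hankel matrix back to a 1-D signal.
H_k = (U[:, :k] * s[:k]) @ Vt[:k]
recon = np.zeros(noisy.size)
counts = np.zeros(noisy.size)
for j in range(H_k.shape[1]):
    recon[j:j + m] += H_k[:, j]
    counts[j:j + m] += 1
recon /= counts

# Energy entropy of the selected components (a feature for the LS-SVM stage).
p = s[:k] ** 2 / np.sum(s[:k] ** 2)
energy_entropy = float(-np.sum(p * np.log(p)))
```

Because k is chosen from the curvature spectrum rather than fixed in advance, signals with different effective ranks still yield feature vectors of consistent dimension after the second SVD.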
Addressing computer processing in internal spherical screen projection, an internal spherical screen projection algorithm based on virtual spherical transform and virtual fisheye lens mapping was proposed. Concerning the spherical screen output distortion caused by irregular fisheye projection, a distortion correction algorithm using a sextic polynomial based on the equal-solid-angle mapping function was presented to approximate an arbitrary fisheye mapping function and eliminate the distortion; the six coefficients of the polynomial can be obtained by solving a system of linear equations. The experimental results show that this method completely eliminates the spherical screen projection distortion. Addressing the change of illumination distribution introduced by spherical screen projection, an illumination correction algorithm based on the cosine of the projection angles was also proposed. The experimental results show that the illumination correction restores the severely modified illumination distribution to one almost identical to that of the original picture. The algorithm has theoretical significance and practical value for the design and software development of spherical projection systems.
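The coefficient-fitting step can be sketched as a linear least-squares problem. In the sketch below, an equidistant mapping r = f·θ stands in for the arbitrary fisheye mapping being approximated by a sextic polynomial in the equal-solid-angle radius; all numerical settings are assumptions.

```python
import numpy as np

# Hypothetical target: approximate an equidistant fisheye mapping r = f*theta
# by a sextic polynomial in the equal-solid-angle radius u = 2*f*sin(theta/2),
# i.e. r ~ c1*u + c2*u^2 + ... + c6*u^6 (six coefficients, no constant term).
f = 1.0
theta = np.linspace(0.01, np.pi / 2, 50)   # field angles up to 90 degrees
u = 2 * f * np.sin(theta / 2)              # equal-solid-angle radius
r = f * theta                              # the fisheye mapping to approximate

# Build the Vandermonde-style system and solve the linear least-squares
# problem for the six polynomial coefficients.
V = np.column_stack([u ** k for k in range(1, 7)])
coeffs, *_ = np.linalg.lstsq(V, r, rcond=None)

r_fit = V @ coeffs
max_err = float(np.max(np.abs(r_fit - r)))
```

With sample points taken from a measured mapping instead of the closed-form one used here, the same solve yields the correction polynomial for any real fisheye lens.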